35 research outputs found

    Deep complementary features for speaker identification in TV broadcast data

    This work investigates a Convolutional Neural Network approach and its fusion with more traditional systems, such as Total Variability Space, for speaker identification in TV broadcast data. The former uses spectrograms for training, while the latter is based on MFCC features. The dataset poses several challenges, such as significant class imbalance and background noise and music. Even though the performance of the Convolutional Neural Network is lower than the state of the art, it is able to complement it and give better results through fusion. Different fusion techniques are evaluated using both early and late fusion.
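The fusion step described above can be illustrated with a minimal score-level (late) fusion sketch; the per-speaker scores, speaker count, and the weight `alpha` below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical normalized per-speaker scores from the two systems.
cnn_scores = np.array([0.2, 0.7, 0.1])   # scores from the CNN (spectrogram) system
tvs_scores = np.array([0.5, 0.4, 0.1])   # scores from the Total Variability (MFCC) system

def late_fuse(a, b, alpha=0.5):
    """Convex combination of two normalized score vectors (late fusion)."""
    return alpha * a + (1.0 - alpha) * b

fused = late_fuse(cnn_scores, tvs_scores, alpha=0.4)
predicted_speaker = int(np.argmax(fused))
```

Early fusion would instead concatenate the two feature representations before classification; the score-level combination above is the simplest late-fusion variant.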

    Using eigenvoices and nearest-neighbours in HMM-based cross-lingual speaker adaptation with limited data

    Cross-lingual speaker adaptation for speech synthesis has many applications, such as use in speech-to-speech translation systems. Here, we focus on cross-lingual adaptation for statistical speech synthesis systems using limited adaptation data. To that end, we propose two eigenvoice adaptation approaches exploiting a bilingual Turkish-English speech database that we collected. In the first approach, eigenvoice weights extracted using Turkish adaptation data and Turkish voice models are transformed into eigenvoice weights for the English voice models using linear regression. Weighting the samples by the distance of the reference speakers to the target speaker during linear regression was found to improve performance, and importance-weighting the elements of the eigenvectors during regression improved it further. The second approach proposed here is speaker-specific state mapping, which performed significantly better than the baseline state-mapping algorithm in both objective and subjective tests. Performance of the proposed state-mapping algorithm improved further when it was used with the intralingual eigenvoice approach instead of the linear-regression-based algorithms used in the baseline system. European Commission ; TÜBİTAK
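The first approach, mapping eigenvoice weights across languages with distance-weighted linear regression, might be sketched as below; the dimensions, the synthetic data, and the inverse-distance weighting form are all assumptions for illustration, not the paper's implementation:

```python
import numpy as np

# Synthetic stand-ins: eigenvoice weights of reference speakers in the
# Turkish and English spaces (here generated with an exact linear relation).
rng = np.random.default_rng(0)
n_ref, k = 20, 5                      # reference speakers, eigenvoices (assumed sizes)
W_tr = rng.normal(size=(n_ref, k))    # Turkish eigenvoice weights per reference speaker
A_true = rng.normal(size=(k, k))
W_en = W_tr @ A_true                  # English eigenvoice weights (synthetic ground truth)

# Sample weighting: reference speakers closer to the target speaker in the
# Turkish weight space get larger regression weights (assumed inverse-distance form).
target_tr = rng.normal(size=k)
d = np.linalg.norm(W_tr - target_tr, axis=1)
s = 1.0 / (d + 1e-6)

# Weighted least squares: minimize sum_i s_i * ||W_en[i] - W_tr[i] @ A||^2
Sw = np.sqrt(s)[:, None]
A, *_ = np.linalg.lstsq(Sw * W_tr, Sw * W_en, rcond=None)

# Transform the target speaker's Turkish weights into English weights.
target_en = target_tr @ A
```

With an exact linear relation and more reference speakers than eigenvoices, the regression recovers the mapping; real adaptation data would of course be noisy, which is where the sample and eigenvector importance weighting matter.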

    LIG at MediaEval 2015 Multimodal Person Discovery in Broadcast TV Task

    In these working notes, the contribution of the LIG team (a partnership between Univ. Grenoble Alpes and Ozyegin University) to the Multimodal Person Discovery in Broadcast TV task at MediaEval 2015 is presented. The task focused on unsupervised learning techniques. The team submitted two different approaches. In the first, new features for the face and speech modalities were tested. In the second, an alternative way to calculate the distance between face tracks and speech segments is presented. The second approach also achieved a competitive MAP score and was able to beat the baseline.

    SAS: A Speaker Verification Spoofing Database Containing Diverse Attacks

    This paper presents the first version of a speaker verification spoofing and anti-spoofing database, named the SAS corpus. The corpus includes nine spoofing techniques, two based on speech synthesis and seven on voice conversion. We design two protocols, one for standard speaker verification evaluation and the other for producing spoofing materials. They thus allow the speech synthesis community to produce spoofing materials incrementally, without knowledge of speaker verification spoofing and anti-spoofing. To provide a set of preliminary results, we conducted speaker verification experiments using two state-of-the-art systems. Without any anti-spoofing techniques, the two systems are extremely vulnerable to the spoofing attacks implemented in our SAS corpus. EPSRC ; CAF ; TÜBİTAK

    Visualizing Pausanias’s <i>Description of Greece</i> with contemporary GIS

    This progress article gives an overview of the potential and challenges of using contemporary Geographic Information System (GIS) applications for the visual rendering and analysis of textual spatial data. The case study is an ancient traveling narrative, Pausanias’s Description of Greece (Periegesis Hellados), written in the second century CE. First, we describe the process of converting the volumes to spatial data using a customized version of the open-source digital semantic annotation platform Recogito. The focus then shifts to the implementation of the collected and organized spatial data in a number of GIS applications, namely Google Maps, DARIAH Geo-Browser, Gephi, Palladio, and ArcGIS. Through empirical experimentation with spatial data and their implementation in different platforms, our paper charts the ways in which contemporary GIS applications may be used to cast new light on ancient understandings of identity, space, and place.

    Heritage metadata: a digital <i>Periegesis</i>

    This chapter aims to review the state of the field for using digital heritage metadata in the context of GIS mapping and Linked Open Data (LOD), and to identify key challenges from both theoretical and practical perspectives. The chapter illustrates these challenges, and how they can be addressed, through a case study of a project using cutting-edge methodologies: the Digital Periegesis project. This allows us to answer research questions about how to organise and link textual data in relation to archaeological material culture, generally, and with regard to Pausanias’s Description of Greece and the places mentioned by him, specifically. This endeavour makes it possible to approach an overarching purpose and address larger issues related to information organisation from epistemological and technical perspectives.

    Multisensor Segmentation-based Noise Suppression for Intelligibility Improvement in MELP Coders

    This thesis investigates the use of an auxiliary sensor, the GEMS device, for improving the quality of noisy speech and for designing noise preprocessors for MELP speech coders. The use of auxiliary sensors for noise-robust ASR applications is also investigated, to develop speech enhancement algorithms that use acoustic-phonetic properties of the speech signal. A Bayesian risk minimization framework is developed that can incorporate the acoustic-phonetic properties of speech sounds and knowledge of human auditory perception into the speech enhancement framework. Two noise suppression systems are presented using the ideas developed in this mathematical framework. In the first system, an aharmonic comb filter is proposed for voiced speech, in which low-energy frequencies are severely suppressed while high-energy frequencies are suppressed only mildly. The proposed system outperformed an MMSE estimator in subjective listening tests and in the DRT intelligibility test for MELP-coded noisy speech. The effect of aharmonic comb filtering on the linear predictive coding (LPC) parameters is analyzed using a missing-data approach. Suppressing the low-energy frequencies without any modification of the high-energy frequencies is shown to improve the LPC spectrum under the Itakura-Saito distance measure. The second system combines the aharmonic comb filter with the acoustic-phonetic properties of speech to improve the intelligibility of MELP-coded noisy speech. The noisy speech signal is segmented into broad sound classes using a multi-sensor automatic segmentation/classification tool, and each sound class is enhanced differently based on its acoustic-phonetic properties. The proposed system is shown to outperform both the MELPe noise preprocessor and the aharmonic comb filter in intelligibility tests when used in concatenation with the MELP coder.
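The core idea of the aharmonic comb filter, severe suppression of low-energy frequencies with only mild suppression of high-energy ones, might be sketched for a single voiced frame as below; the gain values and the median-energy split between "low" and "high" bins are illustrative assumptions, not the thesis's actual design:

```python
import numpy as np

def aharmonic_comb(frame, low_gain=0.1, high_gain=0.9):
    """Suppress low-energy spectral bins severely, high-energy bins mildly."""
    spec = np.fft.rfft(frame)
    energy = np.abs(spec) ** 2
    thresh = np.median(energy)                  # assumed low/high energy split
    gains = np.where(energy >= thresh, high_gain, low_gain)
    return np.fft.irfft(gains * spec, n=len(frame))

# A toy voiced frame: 200 Hz tone at 8 kHz sampling, plus a little noise.
t = np.arange(256) / 8000.0
frame = np.sin(2 * np.pi * 200 * t) + 0.05 * np.random.default_rng(1).normal(size=256)
out = aharmonic_comb(frame)
```

Because the high-energy (harmonic) bins are left nearly intact while the noise-dominated low-energy bins are attenuated, the filtered frame keeps the tonal structure with reduced overall noise energy.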
Since the second noise suppression system uses an automatic segmentation/classification algorithm, exploiting the GEMS signal in an automatic segmentation/classification task is also addressed using an ASR approach. Current ASR engines can segment and classify speech utterances in a single pass; however, they are sensitive to ambient noise. Features extracted from the GEMS signal can be fused with the noisy MFCC features to improve the noise robustness of the ASR system. In the first phase, a voicing feature is extracted from the clean speech signal and fused with the MFCC features. The actual GEMS signal could not be used in this phase because of insufficient sensor data to train the ASR system. Tests are done using the Aurora2 noisy-digits database. The speech-based voicing feature is found to be effective at around 10 dB, but below 10 dB its effectiveness drops rapidly with decreasing SNR because of the severe distortions in the speech-based features at these SNRs. Hence, a novel system is proposed that treats the MFCC features in a speech frame as missing data if the global SNR is below 10 dB and the speech frame is unvoiced. If the global SNR is above 10 dB or the speech frame is voiced, both the MFCC features and the voicing feature are used. The proposed system is shown to outperform some of the popular noise-robust techniques at all SNRs. In the second phase, a new isolated-monosyllable database is prepared that contains both speech and GEMS data. ASR experiments conducted on clean speech showed that the GEMS-based feature, when fused with the MFCC features, decreases the performance. The reason for this unexpected result is found to be partly related to some of the GEMS data being severely noisy. Non-acoustic sensor noise exists in all GEMS data, but the severe noise occurs rarely. A missing-data technique is proposed to alleviate the effects of severely noisy sensor data.
The GEMS-based feature is treated as missing data when it is detected to be severely noisy. The combined features are shown to outperform the MFCC features for clean speech when the missing-data technique is applied. Ph.D. Committee Chair: David V. Anderson; Committee Member: Levent Degertekin; Committee Member: Mark A. Clements; Committee Member: Paul Hasler; Committee Member: Thomas Barnwell
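The missing-data decision rule from the first phase (MFCC features of a frame are treated as missing when the global SNR is below 10 dB and the frame is unvoiced) can be sketched as follows; the function name and feature layout are hypothetical, while the 10 dB threshold and voicing condition come from the abstract above:

```python
def select_features(mfcc, voicing, global_snr_db, is_voiced):
    """Choose the feature set used for one frame, per the missing-data rule.

    Below 10 dB global SNR, MFCCs of unvoiced frames are treated as
    missing data; otherwise both MFCCs and the voicing feature are used.
    """
    if global_snr_db < 10.0 and not is_voiced:
        return {"voicing": voicing}            # MFCCs treated as missing
    return {"mfcc": mfcc, "voicing": voicing}

# Hypothetical frames illustrating both branches of the rule.
low_snr_unvoiced = select_features([12.1, -3.4], 0.2, global_snr_db=5.0, is_voiced=False)
high_snr_voiced = select_features([12.1, -3.4], 0.9, global_snr_db=15.0, is_voiced=True)
```

The same pattern, dropping a feature stream when a detector flags it as unreliable, is what the second phase applies to the GEMS-based feature itself.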